NVIDIA Triton Inference Server


Creating Custom AI Models Using NVIDIA TAO Toolkit with Azure Machine Learning

#artificialintelligence

A fundamental shift is underway in how AI applications are built and deployed. AI applications are becoming more sophisticated and are being applied to broader use cases, which calls for end-to-end AI lifecycle management: from data preparation, to model development and training, to deployment and management of AI apps. A cloud-native approach to this lifecycle can lower upfront costs, improve scalability, and reduce risk for customers using AI applications. Yet while cloud-native app development appeals to developers, machine learning (ML) projects remain notoriously time- and cost-intensive, since they require a team with a varied skill set to build and maintain.


Deploy fast and scalable AI with NVIDIA Triton Inference Server in Amazon SageMaker

#artificialintelligence

Machine learning (ML) and deep learning (DL) have become effective tools for solving diverse computing problems, from image classification in medical diagnosis and conversational AI in chatbots to recommender systems in ecommerce. However, ML models with strict latency or high-throughput requirements can become prohibitively expensive to run at scale on generic computing infrastructure. To achieve the necessary performance and deliver inference at the lowest cost, such models require inference accelerators, like GPUs, that meet the stringent throughput, scale, and latency requirements businesses and customers expect. The deployment of trained models and their accompanying code in the data center, public cloud, or at the edge is called inference serving. We are proud to announce the integration of NVIDIA Triton Inference Server in Amazon SageMaker.
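As a rough sketch of what this integration looks like in practice, deploying a Triton-served model on SageMaker amounts to pointing a SageMaker model at a Triton serving container image and a packaged model repository in S3. The container image URI, role ARN, bucket path, and model name below are illustrative placeholders, not values from the article:

```python
# Hypothetical sketch: the image URI, S3 path, and role ARN are placeholders.
TRITON_IMAGE = "<account>.dkr.ecr.us-east-1.amazonaws.com/sagemaker-tritonserver:latest"

def build_model_config(model_name, model_data_url, role_arn):
    """Assemble the request body that SageMaker's CreateModel API expects
    for a single-container, Triton-served model."""
    return {
        "ModelName": model_name,
        "ExecutionRoleArn": role_arn,
        "PrimaryContainer": {
            "Image": TRITON_IMAGE,
            # A .tar.gz of a Triton model repository uploaded to S3.
            "ModelDataUrl": model_data_url,
            "Environment": {
                # Tells the Triton container which model in the repository
                # to serve by default.
                "SAGEMAKER_TRITON_DEFAULT_MODEL_NAME": "resnet50",
            },
        },
    }

config = build_model_config(
    "triton-resnet50",
    "s3://my-bucket/models/resnet50.tar.gz",
    "arn:aws:iam::123456789012:role/SageMakerRole",
)
# With boto3, this dict would be passed to
# sagemaker_client.create_model(**config), followed by creating an
# endpoint configuration and an endpoint for real-time inference.
```

The sketch stops at building the request body so it stays self-contained; the actual `create_model` call requires AWS credentials and a live account.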